Find clients authenticating from unassigned AD subnets – using Defender for Identity

A well-maintained AD topology is very important because domain-joined clients use this information to locate the optimal Domain Controller (DCLocator documentation here) – failing to find the most suitable domain controller has a performance impact on the client side (slow logon, slow group policy processing, etc.). In an ideal world, when a new subnet is created and AD-joined computers are placed there, AD admins are notified and they assign the subnet to the appropriate site – but sometimes this is not the case.

There are several methods to detect IP addresses coming from unassigned subnets:
– By analyzing the \\<dc>\admin$\debug\netlogon.log logfiles (example here)
– Looking for EventID 5778 in the System log (idea from here)
– Using PowerShell to get all client-registered DNS entries and look them up against the replication subnets (an IP subnet calculator will be needed)
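The PowerShell approach from the last bullet can be sketched as follows – a minimal, illustrative IPv4 subnet-matching helper (the function name and the sample pipeline are my own; in practice the addresses would come from your DNS records and the subnets from (Get-ADReplicationSubnet -Filter *).Name):

```powershell
# Minimal IPv4 CIDR containment check (illustrative helper, not from the post)
function Test-IPInSubnet {
    param(
        [string]$IPAddress,   # e.g. "10.0.1.5"
        [string]$Subnet       # e.g. "10.0.1.0/24"
    )
    $network, $prefix = $Subnet.Split('/')
    # Convert a dotted-quad address to a 32-bit unsigned integer
    $toUInt32 = {
        param($ip)
        $bytes = ([System.Net.IPAddress]::Parse($ip)).GetAddressBytes()
        [Array]::Reverse($bytes)
        [BitConverter]::ToUInt32($bytes, 0)
    }
    $ipInt  = & $toUInt32 $IPAddress
    $netInt = & $toUInt32 $network
    # Build the subnet mask from the prefix length
    $mask = [uint32]([math]::Pow(2, 32) - [math]::Pow(2, 32 - [int]$prefix))
    ($ipInt -band $mask) -eq ($netInt -band $mask)
}

# Example: keep only addresses that fall into none of the replication subnets
$subnets = '10.0.1.0/24', '10.0.2.0/24'   # in practice: (Get-ADReplicationSubnet -Filter *).Name
'10.0.1.5', '10.0.3.5' | Where-Object {
    $ip = $_
    -not ($subnets | Where-Object { Test-IPInSubnet -IPAddress $ip -Subnet $_ })
}
# -> 10.0.3.5
```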

My idea was to use Defender for Identity logs (mainly because I recently (re)discovered the ipv4_lookup plugin in Kusto 🙃).

TL;DR
– by defining the ADReplicationSubnets as a datatable, we can find logon events from the IdentityLogonEvents table where clients use an IP address that is not in any replication subnet
– we can use a “static” datatable, or schedule a PowerShell script which will dynamically populate the items in this table

The query:

let IP_Data = datatable(network:string)
[
    "10.0.1.0/24",    //example subnet1
    "10.0.2.0/24",    //example subnet2
    "192.168.0.0/16", //example subnet3
];
IdentityLogonEvents
| where ActionType == @"LogonSuccess"
| where Protocol == @"Kerberos"
| summarize LogonCount=dcount(Timestamp) by IPAddress,DeviceName
| evaluate ipv4_lookup(IP_Data, IPAddress, network, return_unmatched = true)
| where isempty(network)

Quite simple, isn’t it? So we filter for successful Kerberos logon events (without Protocol filter, other logon events could generate noise) and use the ipv4_lookup function to look up the IP address in the “IP_Data” variable’s “network” column, including those entries that cannot be matched with any subnet – then filter for the unmatched entries.

Example result

Scheduling the query as a PowerShell script

So far, so good. But over time, the list of subnets may change, grow, etc. – how can this subnet list be populated dynamically? For example, by using the Get-ADReplicationSubnet cmdlet. As a prerequisite, I created an app registration with the ThreatHunting.Read.All application permission (with a certificate as credential):

App registration for script scheduling

The following script is used:

#required scope: ThreatHunting.Read.All

##Connect Microsoft Graph using Certauth
$tenantID = '<tenantID>'
$clientID = '<clientID>'
$certThumbprint = "<certThumbprint>"

Connect-MgGraph -TenantId $tenantID -ClientId $clientID -CertificateThumbprint $certThumbprint

##Define hunting query
$huntingQuery = '
let IP_Data = datatable(network:string)
['+( (Get-ADReplicationSubnet -filter *).Name | % {'"' + $_ + '",'}) +'
];
IdentityLogonEvents
| where ActionType == @"LogonSuccess"
| where Protocol == @"Kerberos"
| summarize LogonCount=dcount(Timestamp) by IPAddress,DeviceName
| evaluate ipv4_lookup(IP_Data, IPAddress, network, return_unmatched = true)
| where isempty(network)
'

#construct payload with 7 days timespan
$body = @{
    Query    = $huntingQuery
    Timespan = "P7D"
} | ConvertTo-Json

$url = "https://graph.microsoft.com/v1.0/security/runHuntingQuery"
#Run hunting query
$response = Invoke-MgGraphRequest -Method Post -Uri $url -Body $body

$results = foreach ($result in $response.results){
    [pscustomobject]@{
        IPAddress  = $result.IpAddress
        DeviceName = $result.DeviceName
        LogonCount = $result.LogonCount
    }
}

$results

The hunting query is the same as above, but the datatable entries are populated by the results of the Get-ADReplicationSubnet command (plus some dirty string formatting, like adding quotation marks and a comma). In the $body variable, the Timespan is set to seven days (ISO 8601 format) – when Timespan is not set, it defaults to 30 days (reference).
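For reference, the quoting and joining could also be extracted into a small helper instead of inline concatenation – an illustrative sketch (the function name is my own; in the scheduled script the input would come from (Get-ADReplicationSubnet -Filter *).Name):

```powershell
# Turn a list of subnet strings into the rows of a Kusto datatable literal
# (each entry quoted and followed by a comma, one per line)
function Format-KustoDatatableRows {
    param([string[]]$Subnets)
    ($Subnets | ForEach-Object { '"{0}",' -f $_ }) -join "`n"
}

Format-KustoDatatableRows -Subnets '10.0.1.0/24', '10.0.2.0/24'
# "10.0.1.0/24",
# "10.0.2.0/24",
```

A trailing comma on the last row is accepted by the KQL datatable syntax, which is why the original inline approach works as well.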

Running the script

From this point, it is up to you to schedule the script (or fine tune the output) and email the results. 😊
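One possible way to do both – a sketch with placeholder addresses and a hypothetical script path/task name, to be adjusted to your environment ($results is the output object from the script above):

```powershell
# Email the results when there is anything to report (placeholders to be replaced)
if ($results) {
    [string]$body = $results | ConvertTo-Html
    Send-MailMessage -To "<address>" -From "<address>" -Subject "Logons from unassigned AD subnets" `
        -Body $body -BodyAsHtml -SmtpServer "<SMTP server address>" -Port 25
}

# Register a daily scheduled task for the script (run once, elevated)
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -File "C:\Scripts\Find-UnassignedSubnetLogons.ps1"'
$trigger = New-ScheduledTaskTrigger -Daily -At 6am
Register-ScheduledTask -TaskName 'Find unassigned subnet logons' -Action $action -Trigger $trigger
```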

Extra hint: if you have a multi-domain environment, the hunting query may need to be "domain specific" – for this purpose I would insert the following filter: | where AdditionalFields.Spns == "krbtgt/<domainDNSName>", for example:

IdentityLogonEvents
| where ActionType == @"LogonSuccess"
| where Protocol == @"Kerberos"
| where AdditionalFields.Spns == "krbtgt/F12.HU"
| summarize LogonCount=dcount(Timestamp) by IPAddress,DeviceName
| evaluate ipv4_lookup(IP_Data, IPAddress, network, return_unmatched = true)
| where isempty(network)

Tracking Microsoft Secure Score changes

Microsoft Secure Score can be a good starting point in assessing organizational security posture. Improvement actions are added to the score regularly (link) and points achieved are updated dynamically.

For me, Secure Score is a measurement of hard work represented in little percentage points. Every little point is a reward which can be taken back by Microsoft when changes happen in the current security state (be it the result of an action [i.e. someone enabled the print spooler on a domain controller] – or inactivity [i.e. a domain admin account became “dormant”]). Whatever the reason for the score degradation, I want to be alerted, because I don’t want to check this chart on a daily basis. Unfortunately, I didn’t find any ready-to-use solution, so I’m sharing my findings.

TL;DR
– The Get-MgSecuritySecureScore Graph PowerShell cmdlet can be used to fetch 90 days of score data
– The basic idea is to compare the actual scores with yesterday’s scores and report on differences
– When new controlScores (~recommendations) arrive, send a separate alert
– The script I share is a PowerShell script with certificate auth, but no Graph PowerShell cmdlets are used, only native REST API calls (sorry, I still have issues with Graph PS, while the native approach is consistent). Using app auth with a certificate, the script can be scheduled to run on a daily basis (I don’t recommend a more frequent schedule, as there are temporary score changes which are mostly self-remediating)

Prerequisites
We will need an app registration with Microsoft Graph/SecurityEvents.Read.All Application permission (don’t forget the admin consent):

App registration with SecurityEvents.Read.All permission

On the server on which you are planning to schedule the script, create a new certificate. Example PowerShell command*:

New-SelfSignedCertificate -FriendlyName "F12 - Secure score monitor" -NotAfter (Get-Date).AddYears(2) -Subject "F12 - Secure score monitor" -CertStoreLocation Cert:\LocalMachine\My -Provider "Microsoft Enhanced RSA and AES Cryptographic Provider" -KeyExportPolicy NonExportable

Don’t forget to grant read access to the private key for the account which will run the schedule. Right click on the certificate – All Tasks – Manage Private Keys…

I prefer to use “Network Service” for these tasks because limited permissions are needed

Export the certificate’s public key and upload it to the app registration’s certificates:

Let’s move on to the script.

The script

Some variables need to be modified, like $tenantID, $appID and $certThumbprint in the first lines. Also, the notification part (the Send-MailMessage lines) needs to be customized to your needs.
The script itself can be broken down as follows:
– authenticate to Graph using a certificate (the auth function is from MSEndpointMgr.com)
– the following two lines query the Secure Score data for today and yesterday:
$url = 'https://graph.microsoft.com/beta/security/securescores?$top=2'
$webResponse = Invoke-RestMethod -Method Get -Uri $url -Headers $headers -ErrorAction Stop

– some HTML style for readable emails
– compare today’s and yesterday’s controlscores – alert when there are new / deprecated recommendations
– compare today’s scores with yesterday’s scores – alert when changes are detected

Here it is:

$tenantId = '<your tenant ID>'
$appID = '<application ID with SecurityEvents.Read.All admin consented permission>'
$certThumbprint = '<thumbprint of certificate used to connect>'
$resourceAppIdUri = 'https://graph.microsoft.com'

#region Auth
$cert = Get-ChildItem Cert:\LocalMachine\My\$certThumbprint
$cert64Hash = [System.Convert]::ToBase64String($cert.GetCertHash())
function Get-Token {
    #https://msendpointmgr.com/2023/03/11/certificate-based-authentication-aad/
    #create JWT timestamp for expiration 
    $startDate = (Get-Date "1970-01-01T00:00:00Z" ).ToUniversalTime()  
    $jwtExpireTimeSpan = (New-TimeSpan -Start $startDate -End (Get-Date).ToUniversalTime().AddMinutes(2)).TotalSeconds  
    $jwtExpiration = [math]::Round($jwtExpireTimeSpan, 0)  
  
    #create JWT validity start timestamp  
    $notBeforeExpireTimeSpan = (New-TimeSpan -Start $StartDate -End ((Get-Date).ToUniversalTime())).TotalSeconds  
    $notBefore = [math]::Round($notBeforeExpireTimeSpan, 0)  
  
    #create JWT header  
    $jwtHeader = @{  
        alg = "RS256"  
        typ = "JWT"  
        x5t = $cert64Hash -replace '\+', '-' -replace '/', '_' -replace '='  
    }
    #create JWT payload  
    $jwtPayLoad = @{  
        aud = "https://login.microsoftonline.com/$TenantId/oauth2/token"  
        exp = $jwtExpiration   
        iss = $appID  
        jti = [guid]::NewGuid()   
        nbf = $notBefore  
        sub = $appID  
    }  
  
    #convert header and payload to base64  
    $jwtHeaderToByte = [System.Text.Encoding]::UTF8.GetBytes(($jwtHeader | ConvertTo-Json))  
    $encodedHeader = [System.Convert]::ToBase64String($jwtHeaderToByte)  
  
    $jwtPayLoadToByte = [System.Text.Encoding]::UTF8.GetBytes(($jwtPayLoad | ConvertTo-Json))  
    $encodedPayload = [System.Convert]::ToBase64String($jwtPayLoadToByte)  
  
    #join header and Payload with "." to create a valid (unsigned) JWT  
    $jwt = $encodedHeader + "." + $encodedPayload  
  
    #get the private key object of your certificate  
    $privateKey = ([System.Security.Cryptography.X509Certificates.RSACertificateExtensions]::GetRSAprivateKey($cert))  
  
    #define RSA signature and hashing algorithm  
    $rsaPadding = [Security.Cryptography.RSASignaturePadding]::Pkcs1  
    $hashAlgorithm = [Security.Cryptography.HashAlgorithmName]::SHA256  
  
    #create a signature of the JWT  
    $signature = [Convert]::ToBase64String(  
        $privateKey.SignData([System.Text.Encoding]::UTF8.GetBytes($jwt), $hashAlgorithm, $rsaPadding)  
    ) -replace '\+', '-' -replace '/', '_' -replace '='  
  
    #join the signature to the JWT with "."  
    $jwt = $jwt + "." + $signature  
  
    #create a hash with body parameters  
    $body = @{  
        client_id             = $appID
        resource              = $resourceAppIdUri
        client_assertion      = $jwt  
        client_assertion_type = "urn:ietf:params:oauth:client-assertion-type:jwt-bearer"  
        scope                 = $scope  
        grant_type            = "client_credentials"  
  
    } 
    $url = "https://login.microsoftonline.com/$TenantId/oauth2/token"
  
    #use the self-generated JWT as Authorization  
    $header = @{  
        Authorization = "Bearer $jwt"  
    }  
  
    #splat the parameters for Invoke-Restmethod for cleaner code  
    $postSplat = @{  
        ContentType = 'application/x-www-form-urlencoded'  
        Method      = 'POST'  
        Body        = $body  
        Uri         = $url  
        Headers     = $header  
    }  
  
    $request = Invoke-RestMethod @postSplat  

    #view access_token  
    $request
}
$accessToken = (Get-Token).access_token

$headers = @{
    'Content-Type'  = 'application/json'
    'Accept'        = 'application/json'
    'Authorization' = "Bearer $accessToken"
}
#endregion Auth

$url = 'https://graph.microsoft.com/beta/security/securescores?$top=2'
$webResponse = Invoke-RestMethod -Method Get -Uri $url -Headers $headers -ErrorAction Stop

#HTML Style for table reports
$Style = @'
<style>
table{
border-collapse: collapse;
border-width: 2px;
border-style: solid;
border-color: grey;
color: black;
margin-bottom: 10px;
text-align: left;
}
th {
    background-color: #0000ff;
    color: white;
    border: 1px solid black;
    margin: 10px;
}
td {
    border: 1px solid black;
    margin: 10px;
}
</style>
'@


$controlScoreChanges = Compare-Object ($webResponse.value[0].controlScores.controlName) -DifferenceObject ($webResponse.value[1].controlScores.controlName)
$report_controlScoreChanges = if ($controlScoreChanges){
    foreach ($control in $controlScoreChanges){
        [pscustomobject]@{
            State       = switch ($control.SideIndicator){ "<=" {"New"} "=>" {"Removed"} }
            Category    = $webResponse.value[0].controlScores.Where({$_.controlName -eq $control.InputObject}).controlCategory
            Name        = $control.InputObject
            Description = $webResponse.value[0].controlScores.Where({$_.controlName -eq $control.InputObject}).description
        }
    }
}

if ($report_controlScoreChanges){
    [string]$body = $report_controlScoreChanges | ConvertTo-Html -Head $Style
    Send-MailMessage -To "<address>" -From "<address>" -Subject "Secure Score control changes detected" -Body $body -SmtpServer "<SMTP server address>" -Port 25 -BodyAsHtml

}

$ErrorActionPreference = 'SilentlyContinue'
$report_scoreChanges = foreach ($controlscore in $webResponse.value[0].controlScores){
    if (Compare-Object $controlscore.score -DifferenceObject ($webResponse.value[1].controlScores.Where({$_.controlName -eq $controlscore.controlName}).score)){
        [pscustomobject]@{
            date            = $controlscore.lastSynced
            controlCategory = $controlscore.controlCategory
            controlName     = $controlscore.controlName
            scoreChange     = $controlscore.score - ($webResponse.value[1].controlScores.Where({$_.controlName -eq $controlscore.controlName})).score
            description     = $controlscore.description
        }
    }
}

if ($report_ScoreChanges){
    [string]$body = $report_ScoreChanges | ConvertTo-Html -Head $Style
    Send-MailMessage -To "<address>" -From "<address>" -Subject "Secure Score changes detected" -Body $body -SmtpServer "<SMTP server address>" -Port 25 -BodyAsHtml

}

Some example results:

New recommendations (Defender for Identity fresh install -> new MDI recommendations)
Score changes by recommendation

Fun fact:
The Defender portal section where these score changes are displayed actually uses a “scoreImpactChangeLogs” node for these changes, but unfortunately I didn’t find a way to query this secureScoresV2 endpoint:

https://security.microsoft.com/apiproxy/mtp/secureScore/security/secureScoresV2?$top=400

I hope this means that this information will be available via Graph, so that no calculations will be needed to detect score changes.

Reporting on Entra Application Proxy published applications – Graph PowerShell

I thought it would be a quick Google search to find a PowerShell script that gives a report on applications published via Entra Application Proxy, but I only found scripts (link1, link2, link3) using the AzureAD PowerShell module – so I decided to write a new version using Graph PowerShell.

The script:

#Requires Microsoft.Graph.Beta.Applications
Connect-MgGraph

$AppProxyConnectorGroups = Get-MgBetaOnPremisePublishingProfileConnectorGroup -OnPremisesPublishingProfileId applicationproxy

$AppProxyPublishedApps = foreach ($connector in $AppProxyConnectorGroups){
    Get-MgBetaOnPremisePublishingProfileConnectorGroupApplication -ConnectorGroupId $connector.Id -OnPremisesPublishingProfileId applicationproxy | ForEach-Object {
        $onPremisesPublishingInfo = (Get-MgBetaApplication -ApplicationId $_.Id -Property onPremisesPublishing).onPremisesPublishing
        [pscustomobject]@{
            DisplayName        = $_.DisplayName
            Id                 = $_.Id
            AppId              = $_.AppId
            ExternalURL        = $onPremisesPublishingInfo.ExternalUrl
            InternalURL        = $onPremisesPublishingInfo.InternalUrl
            ConnectorGroupName = $connector.Name
            ConnectorGroupId   = $connector.Id
        }
    }
}

$AppProxyPublishedApps

Some story

Entra portal is still using the https://main.iam.ad.ext.azure.com/api/ApplicationProxy/ConnectorGroups endpoint to display the connector groups:

So the next step was to figure out if there are some Graph API equivalents. Google search: graph connectorgroups site:microsoft.com led me to this page: https://learn.microsoft.com/en-us/graph/api/connectorgroup-list?view=graph-rest-beta&preserve-view=true&tabs=http
From this point it was “easy” to follow the logic of previously linked scripts and “translate” AzureAD PowerShell commands to Graph PS.

Note: as per the documentation, Directory.ReadWrite.All permission is required and only delegated permissions work.

As an alternative, here is the original script that did not use these commands from Microsoft.Graph.Beta.Applications:

Connect-MgGraph

$AppProxyConnectorGroups = Invoke-MgGraphRequest -Uri 'https://graph.microsoft.com/beta/onPremisesPublishingProfiles/applicationproxy/connectorgroups' -Method GET

$AppProxyPublishedApps = foreach ($connector in $AppProxyConnectorGroups.value){
    $publishedApps = Invoke-MgGraphRequest -Uri "https://graph.microsoft.com/beta/onPremisesPublishingProfiles/applicationproxy/connectorgroups/$($connector.id)/applications" -Method GET
    foreach ($app in $publishedApps.value){
        [PSCustomObject]@{
            DisplayName        = $app.DisplayName
            id                 = $app.id
            appId              = $app.appId
            ConnectorGroupName = $connector.name
            ConnectorGroupID   = $connector.id
        }
    }
}

$AppProxyReport = foreach ($publishedApp in $AppProxyPublishedApps){
    $onpremisesPublishingInfo = Invoke-MgGraphRequest -Uri "https://graph.microsoft.com/beta/applications/$($publishedApp.id)?`$select=onpremisespublishing" -Method GET
    [PSCustomObject]@{
        DisplayName = $publishedApp.DisplayName
        id = $publishedApp.id
        appid = $publishedApp.appId
        ConnectorGroupName = $publishedApp.ConnectorGroupName
        ConnectorGroupID = $publishedApp.ConnectorGroupID
        ExternalURL = $onpremisesPublishingInfo.onPremisesPublishing.externalUrl
        InternalURL = $onpremisesPublishingInfo.onPremisesPublishing.internalUrl
        externalAuthenticationType = $onpremisesPublishingInfo.onPremisesPublishing.externalAuthenticationType
    }
}

Playing with Microsoft Passport Key Storage Provider – protect user VPN certificates with Windows Hello for Business?

I’m really into this Windows Hello for Business topic… Recently, I was going through the “RDP with WHfB” guide from MS Learn (link) which gave me an idea: can this method be used to protect user VPN certificates? The short answer is: yes, but no 🙂

TL;DR
– Depending on your current infrastructure, several options are available to protect VPN with MFA: Azure MFA NPS extension, SAML-auth VPN with Conditional Access, Entra ‘mini-CA’ Conditional Access
– Hello for Business can be used to protect access to certificates, so why not use it to protect VPN certs?

Protecting VPN with MFA with Microsoft tools

NPS Extension
The most popular option I know of to protect VPN with MFA is the Azure MFA NPS extension (link). The logic is very simple: the RADIUS request coming to the NPS server is authenticated against Active Directory, then the NPS extension performs a secondary authentication (Azure MFA).

SAML-based authentication with Conditional Access
This depends on the vendor of the VPN appliance, but the mechanism is that an Enterprise application is created in Entra and Conditional Access policy can be applied to it.

Conditional Access VPN
There is another option, called “Conditional Access VPN connectivity” in Entra – and by the way, it seems to me that Microsoft is hiding this option (I guess because it uses Azure Active Directory Graph, which is deprecated). I found a photo of how it looked in the old days (picture taken from here):

In the Entra portal this option is not visible (at least for me):

But when using the search bar, the menu can be found:

Some documentation links about this feature:

  • Conditional Access Framework and Device Compliance for VPN (link)
  • Conditional access for VPN connectivity using Microsoft Entra ID (link)
  • VPN and conditional access (link)

The mechanism in short: Entra creates a ‘mini-CA’ which issues short-lived certificates to clients. When a Windows VPN client is configured to use the DeviceCompliance flow, the client attempts to get a certificate from Entra before connecting to the VPN endpoint (from an admin standpoint, a ‘VPN Server’ application is created in Entra, and Conditional Access policies can be applied to this application – I’m not going into details about this one, mainly because I encountered a lot of inconsistencies in the user experience when testing this solution 🙃). When everything is OK, the user gets a short-lived certificate which can be used for authentication (e.g. EAP-TLS).
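For illustration, the DeviceCompliance part of the Windows VPN profile (VPNv2 ProfileXML) looks roughly like this – a trimmed sketch based on the linked “VPN and conditional access” documentation, not a complete, working profile:

```xml
<VPNProfile>
  <!-- ...server/tunnel/EAP configuration omitted... -->
  <DeviceCompliance>
    <!-- Enables the Conditional Access certificate flow described above -->
    <Enabled>true</Enabled>
    <Sso>
      <Enabled>true</Enabled>
    </Sso>
  </DeviceCompliance>
</VPNProfile>
```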
Some screenshots about this:

Conditional Access policy evaluation result

Certificate valid for ~1 hour

VPN Certificate created with Microsoft Passport KSP
Disclaimer: using VPN certificates this way is not an official method supported by Microsoft; I tested it only for entertainment purposes.

This was the initial trigger of this post – based on the “Remote Desktop sign-in with Windows Hello for Business” tutorial, create VPN certificates using the Microsoft Passport KSP (link). The process is straightforward:
– create the VPN certificate template (or duplicate the one you already have)
– export the template to a txt file
– modify the pKIDefaultCSPs setting to Microsoft Passport Key Storage Provider
– update the template with the new setting

User experience: well, if the user is WHfB enrolled and logs in with WHfB then nothing changes (the certificate is used “silently” upon connecting) – but when using password to log in to Windows, the VPN connection prompts for Hello credentials:

So if Hello for Business can be considered a multi-factor authentication method, then this solution fits as well 🙂

Convenience PIN policy enables Windows Hello for Business enrollment in Windows Security

Windows Hello for Business and Windows Hello may sound like siblings, but they are actually two different families in the authentication world (link)*. Hello basically uses password caching, while Hello for Business uses asymmetric authentication (key- or certificate-based) – that’s why Windows Hello for Business (WHfB) has some infrastructure prerequisites in an on-premises or hybrid environment. Not every environment is prepared for WHfB, hence some organizations may have opted to enable convenience PIN for their users to make sign-in… well… more convenient.
Why does it matter?
Because users may encounter errors during WHfB enrollment, WHfB has an impact on Active Directory infrastructure, WHfB is a strong authentication method (~considered MFA in Conditional Access policy evaluation), and so on.

*the common thing about Hello and WHfB is the Credential Provider: users see the PIN/biometric authentication option on their logon screen

TL;DR
– The ‘Turn on convenience PIN sign-in’ policy enables Hello PIN in Account settings, but invokes Hello for Business enrollment when set up in the Windows Security app
– Hello for Business implementation is very simple (and preferred over Hello) with Cloud Kerberos Trust, but migrating users from Hello has some pitfalls
– Hello usage can be detected in the following registry hive:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\NgcPin\Credentials\<userSID>

Behavior
Let’s assume that WHfB is not configured in your environment, even the Intune default policy for WHfB is set to “Not configured” like this:

On a client device, the eligibility for WHfB can be checked using dsregcmd /status under “Ngc Prerequisite Check” (link). On a domain joined/hybrid joined device, the PreReqResult will have the WillNotProvision value until WHfB is explicitly enabled.

When you open Settings – Accounts – Sign-in options, you will see that PIN (Windows Hello) is greyed out, and the Windows Security app will not display options to set up Hello:

Now let’s enable convenience PIN sign-in group policy: Computer Configuration – Administrative Templates – System – Logon – Turn on convenience PIN sign-in

The Windows Security tray icon almost immediately shows a warning status:

The Hello enrollment is now active in the Settings – Accounts – Sign-in options menu, and we also have the option to set up Hello in Windows Security:

And here lies the discrepancy in the enrollment behavior: the Settings menu (left) sets up Hello, while the Windows Security app (right) invokes the WHfB enrollment process.

Windows Hello setup using Settings menu
Windows Security invoking Hello for Business enrollment

Migrating from Hello to Hello for Business
At this point, we may decide to prevent Hello for Business – but I suggest following the other direction and migrating Hello users to Hello for Business. Since we have Cloud Kerberos Trust, we don’t need a PKI either, only (at least one) Windows Server 2016 or newer domain controllers (and hybrid-joined devices with hybrid identities with MFA registration, of course) [link]… so the deployment is very easy… but migration can be a bit tricky.

First, when a Hello for Business policy is applied on a computer, the credential provider (~the login screen asking for PIN) is disabled for the user until WHfB enrollment. This means that the user will be asked for password instead of PIN – this may result in failed logon attempts, because users will probably enter their PIN “as usual”.
Another issue that you may encounter is related to the previous PIN and the applied PIN policy. Based on my experience, the WHfB enrollment process prompts for the current PIN and tries to set it as the new PIN (from a user experience standpoint, this was a clever decision by Microsoft), but if the new policy requires a more complex PIN, the process may encounter an error (0x801c0026, not documented here).

Convenience PIN migration to Hello for Business PIN error

This error is handled by the logon screen:

Detecting Hello usage
As problems may occur with the Hello to WHfB migration, it’s a good idea to have an inventory of Hello users. On every device, each Hello registration is stored under the following registry hive: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\NgcPin\Credentials\<userSID>

It’s up to your creativity how you collect this information and translate the SIDs to some human readable format 🙂
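One possible sketch (my own, runs locally on a client): enumerate the registrations under that hive and translate the SIDs with the .NET SecurityIdentifier class.

```powershell
# List Hello PIN registrations on the local machine and resolve user SIDs
$ngcPath = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\NgcPin\Credentials'
Get-ChildItem -Path $ngcPath -ErrorAction SilentlyContinue | ForEach-Object {
    $sid = $_.PSChildName
    [pscustomobject]@{
        ComputerName = $env:COMPUTERNAME
        SID          = $sid
        # Translate may fail for orphaned/unknown SIDs, hence the try/catch
        Account      = $(
            try { ([System.Security.Principal.SecurityIdentifier]$sid).Translate([System.Security.Principal.NTAccount]).Value }
            catch { '<unresolved>' }
        )
    }
}
```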


Hunting for report-only (Microsoft-managed) Conditional Access impacts

Microsoft is rolling out the managed conditional access policies (link) gradually and I wanted to know how it is going to impact the users (which users to be exact). Apparently, if the Sign-in logs are not streamed to a Log Analytics Workspace, the options are limited – but if you have the AADSignInEventsBeta table under Advanced hunting on the Microsoft Defender portal, some extra info can be gathered.

Streaming Entra logs to Log Analytics gives wonderful insights (not only for Conditional Access), so it is recommended to set up the diagnostic settings. If it is not an option, but the AADSignInEventsBeta is available (typically organizations with E5 licences), then the following query will show those sign-ins that would have been impacted by a report-only Conditional Access policy:

AADSignInEventsBeta
| where LogonType has "InteractiveUser"
| mv-apply todynamic(ConditionalAccessPolicies) on (
    where ConditionalAccessPolicies.result == "reportOnlyInterrupted" or ConditionalAccessPolicies.result == "reportOnlyFailure"
    | where ConditionalAccessPolicies.displayName has "Microsoft-managed:" //filter for managed Conditional Access policies
    | extend CADisplayName = tostring(ConditionalAccessPolicies.displayName)
    | extend CAResult = tostring(ConditionalAccessPolicies.result)
)
| distinct Timestamp, RequestId, Application, ResourceDisplayName, AccountUpn, CADisplayName, CAResult

Note: in the AADSignInEventsBeta table, ConditionalAccessPolicies is a JSON value stored as a string, so the todynamic() function is needed.

Note 2: since every Conditional Access policy is evaluated against each logon, the query first filters for those sign-ins where the report-only result is ‘Interrupted’ or ‘Failure’, then the policy displayName is used to narrow down the results. Starting the filter with displayName would be pointless.

Some example summarizations if you need to see the big picture (same query as above but the last line can be replaced with these ones):
View impacted users count by application:
| summarize AffectedUsersCount=dcount(AccountUpn) by Application, CADisplayName, CAResult
Same summarization in one day buckets:
| summarize AffectedUsers = dcount(AccountUpn) by bin(Timestamp,1d), CADisplayName, CAResult
List countries by result:
| summarize make_set(Country) by  CADisplayName, CAResult

Another useful feature is the Monitoring (Preview) menu in Conditional Access – Overview:

Here we have a filter option called ‘Policy evaluated’ where report-only policies are grouped under the ‘Select individual report-only policies’ section. This gives an overview but unfortunately does not list the affected users.

When a Microsoft-managed policy is opened, this chart is presented under the policy info as well.

Entra Workload Identities – Trusted Certificate Authorities (public preview)

In the November 2023 – What’s New in Microsoft Entra Identity & Security w/ Microsoft Security CxE identity episode, a public preview feature of the Entra Workload ID premium license was presented (link), which was actually announced on November 9th (link). I really love the idea of restricting application key credentials to a predefined list of Certificate Authorities, which is why I thought I’d write a few words about it.

TL;DR
– You can generate a report on current keyCredentials usage (with certificate Issuer data) using the PowerShell script below (Graph PowerShell is used here) [no extra license needed]
– First, you create a certificateBasedApplicationConfigurations object
– Then you can modify the defaultAppManagementPolicy or create an appManagementPolicy and apply it directly to one or more application objects (for the latter, tutorial below)
– These configurations require Entra Workload ID premium license

Reporting on application key credentials

The linked announcements highlight how to set the defaultAppManagementPolicy, but before setting it, you may want to know which applications use certificates to authenticate and which CA issued those certs. This way, you can first change the certificates to ones you trust, then set up the restriction(s). The following script lists these applications and the Issuer of each certificate (for the sake of simplicity, I use the Invoke-MgGraphRequest command).

#https://learn.microsoft.com/en-us/graph/api/resources/keycredential?view=graph-rest-1.0
Connect-MgGraph

#region keyauthapps
$applications_url = 'https://graph.microsoft.com/beta/applications?$top=100'
$obj_applications = $null
while ($null -ne $applications_url){
    $response = Invoke-MgGraphRequest -Method GET -Uri $applications_url
    $obj_applications += $response.value
    $applications_url = $response.'@odata.nextLink'
}

#filter apps using keyCredentials
$keyauthApps = $obj_applications | Where-Object {$_.keyCredentials -ne $null}
#read keyCredentials info
$KeyAuthApps_creds = foreach ($app in $keyauthApps){
    Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/beta/applications/$($app.id)?`$select=keyCredentials"
}
#endregion keyauthapps

#region build report - apps
$report_Apps = foreach ($cred in $KeyAuthApps_creds.keyCredentials){
    $tmp_appReference = $keyauthApps.Where({$_.keyCredentials.keyId -eq $cred.keyId})
    [pscustomobject]@{
        KeyIdentifier     = $cred.customKeyIdentifier
        KeyDisplayName    = $cred.displayName
        KeyStartDateTime  = $cred.startDateTime
        KeyEndDateTime    = $cred.endDateTime
        KeyUsage          = $cred.usage
        KeyType           = $cred.type
        Issuer            = ([System.Security.Cryptography.X509Certificates.X509Certificate2]([Convert]::FromBase64String($cred.key))).Issuer
        EntityID          = $tmp_appReference.id
        EntityAppId       = $tmp_appReference.appId
        EntityType        = "application"
        EntityDisplayName = $tmp_appReference.displayName
    }
}
#endregion build report - apps

$report_Apps | Out-GridView

The result will look like this (yes, I use self-signed certificates in my demo environment 🙈):

Example result from reporting script

Note: the Issuer field is not 100% reliable, as an arbitrary issuer can be set when creating a self-signed certificate. The following method will show each certificate in the trust chain (the $cred variable comes from the foreach loop above):

$tmp_cert  = [System.Security.Cryptography.X509Certificates.X509Certificate2]([convert]::FromBase64String($cred.key))
$certChain = [System.Security.Cryptography.X509Certificates.X509Chain]::new()
$null = $certChain.Build($tmp_cert)   #returns $true when the chain could be validated
$certChain.ChainElements.Certificate
Example chain of a free Let’s Encrypt certificate

Building the Trusted Certificate Authority policy

To restrict application keyCredentials, the following should be kept in mind (announcement link again):
– The policy applies only to new credentials; it won’t disable existing keys
– At least one root CA needs to be declared, and a chain can consist of a maximum of 10 objects
First, you create a certificateBasedApplicationConfigurations object (~the trusted cert chain)
Next, you can modify the defaultAppManagementPolicy to restrict all keyCredentials to this/these trusted CAs (as demonstrated on the linked page)
OR you can create a separate appManagementPolicy with the trusted CA restriction, which can then be applied directly to one or more applications (steps below)
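
For the defaultAppManagementPolicy route, a hedged sketch of the call is shown below – the restriction schema mirrors the appManagementPolicy example later in this post, and <configurationID> is a placeholder you need to replace with your own object’s id:

METHOD: PATCH
ENDPOINT: beta
URI: https://graph.microsoft.com/beta/policies/defaultAppManagementPolicy
REQUEST BODY:
{
  "applicationRestrictions": {
    "keyCredentials": [
      {
        "restrictionType": "trustedCertificateAuthority",
        "certificateBasedApplicationConfigurationIds": [
          "<configurationID>"
        ]
      }
    ]
  }
}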

Creating the certificateBasedApplicationConfigurations object

In this example, I’m going to use Graph Explorer to create the object. As a Windows user, I will simply export my issuing CA’s (F12SUBCA01) certificate and its root CA’s (ROOTCA01) certificate to Base-64 encoded CER files using the certlm.msc MMC snap-in, open them in Notepad and copy the contents to the Graph Explorer call’s Request body.
Find the issuing CA’s cert, then right-click – All Tasks – Export:

Select “Base-64 encoded X.509 (.CER)” as export file format.

Repeat the same steps for each certificate in the chain.
Now, open the CER files with Notepad, remove the '-----BEGIN CERTIFICATE-----' and '-----END CERTIFICATE-----' lines and all line breaks.

Or you can use PowerShell:

$cert = get-childitem Cert:\LocalMachine\ca\ | ? {$_.Subject -match "F12SUBCA01"}
[convert]::ToBase64String($cert.RawData)
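
If you want both certificates in one go, here is a small sketch – the CA names come from my lab and are placeholders; in a typical setup the Root store holds the root CA (ROOTCA01) and the CA store the issuing CA (F12SUBCA01):

foreach ($name in 'F12SUBCA01','ROOTCA01'){
    $cert = Get-ChildItem Cert:\LocalMachine\CA, Cert:\LocalMachine\Root |
        Where-Object {$_.Subject -match $name} | Select-Object -First 1
    "--- $name ---"
    [convert]::ToBase64String($cert.RawData)
}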

These values will be used in the payload sent to Microsoft Graph.
CAUTION! Use the beta endpoint for now, as this is a preview feature. If you accidentally use the v1.0 endpoint, you will encounter issues (example below)

METHOD: POST
ENDPOINT: beta
URI: https://graph.microsoft.com/beta/directory/certificateAuthorities/certificateBasedApplicationConfigurations
REQUEST BODY:
{
  "displayName": "F12 Cert Chain",
  "description": "Allowed App certificates issued by F12SUBCA ",
  "trustedCertificateAuthorities": [{
    "isRootAuthority": true,
    "certificate": "<rootCA base64 certificate data>"
  },
  {
    "isRootAuthority": false,
    "certificate": "<subCA base64 certificate data>"
  }]
}
Creating the trustedCA configuration object

If everything was inserted correctly, the response includes an id – take a note of it. If you did not manage to record it, no problem, you can query these configurations as follows:

METHOD: GET
ENDPOINT: beta
URI: https://graph.microsoft.com/beta/directory/certificateAuthorities/certificateBasedApplicationConfigurations
List configuration objects

Note: when you dig further, the CA information can be queried for each configuration id, for example (you can omit the ‘?$select=isRootAuthority,issuer’ part if you want to check the certificate data too):

METHOD: GET
ENDPOINT: beta
URI: https://graph.microsoft.com/beta/directory/certificateAuthorities/certificateBasedApplicationConfigurations/<configurationID>/trustedCertificateAuthorities?$select=isRootAuthority,issuer

Creating the appManagementPolicy object

Now that we have the CA configuration, the next step is to create the appManagementPolicy object (if you are not going to apply it in the defaultAppManagementPolicy). The appManagementPolicy can contain restrictions for passwordCredentials and KeyCredentials. In this example, I’m going to create a policy that prohibits passwordCredentials and restricts key credentials to the trusted CA configuration defined above.

METHOD: POST
ENDPOINT: beta
URI: https://graph.microsoft.com/beta/policies/appManagementPolicies
REQUEST BODY:
{
    "displayName": "F12 AppManagementPolicy – F12SUBCA allowed only",
    "description": "This policy restricts application credentials to certificates issued by F12SUBCA and disables password addition",
    "isEnabled": true,
    "restrictions": {
        "passwordCredentials": [
            {
                "restrictionType": "passwordAddition",
                "maxLifetime": null
            }
        ],
        "keyCredentials": [
            {
                "restrictionType": "trustedCertificateAuthority",
                "certificateBasedApplicationConfigurationIds": [
                    "0d60f78e-9916-4db2-9cee-5c8e470a19e9"
                ]
            }
        ]
    }
}

Creating the appManagementPolicy object

Take a note of the id given in the response as it will be used in the final step.

NOTE: if you accidentally use the v1.0 endpoint, you will encounter issues like this:

“Expected property ‘certificateBasedApplicationConfigurationIds’ is not present on resource of type ‘KeyCredentialConfiguration'”

Applying the policy to an application

Finally, the policy needs to be applied to an application, as follows:

METHOD: POST
ENDPOINT: beta
URI: https://graph.microsoft.com/beta/applications/<objectID of application>/appManagementPolicies/$ref
REQUEST BODY:
{
    "@odata.id": "https://graph.microsoft.com/beta/policies/appmanagementpolicies/<appManagementPolicyID>"
}
Applying the policy to an application
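
Should you ever need to detach the policy from an application, the same $ref collection supports removal as well – a hedged sketch following the Graph reference-removal convention:

METHOD: DELETE
ENDPOINT: beta
URI: https://graph.microsoft.com/beta/applications/<objectID of application>/appManagementPolicies/<appManagementPolicyID>/$ref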

The result for this application:

Uploading a certificate not issued by the trusted CA fails
Adding new client secret option is greyed out

Closing words: it is a bit cumbersome to configure these settings, but the result is truly satisfying 😊 I hope that once it goes GA, it will get a graphical interface to ease the process.

Entra Workload Identites passwordLifetime policy vs. Entra ID Application Proxy – Application operation failed

A while back, I wrote about the Entra Workload Identities Premium licence and its very appealing capabilities (link). One of my favorites was the defaultAppManagementPolicy, which can (also) restrict the lifetime of (new) password credentials created for an application. Well, it looks like I was too restrictive, which led to the error message in the title.

TL;DR

  • When you publish an application via Entra ID Application Proxy, the application is generated with a password credential valid for 1 year (actually 365 days + 4 minutes)
  • If you have a Workload Identities Premium licence* and have set the default password credential lifetime to 12 months or less, Entra ID will not be able to create the Application Proxy application, resulting in this very informative error message upon application creation: ‘Application operation failed’
  • Conclusion: when using the passwordLifetime restriction in the defaultAppManagementPolicy and you intend to use AppProxy, make sure to set this lifetime to at least 366 days

Explained

When publishing a new application via Entra ID Application Proxy, I encountered this very detailed error message: ‘Application operation failed’

Error message during AppProxy application creation

I went through some previously published applications to get an idea what may be wrong… And on the ‘Certificates & secrets’ page I had a flashback about configuring password credentials policy, then I was on the right track with a small surprise.

When an application is published with Application Proxy, an app registration is created with a password credential.

There is nothing you can do about it (as far as I know), you just live with it – it is handled by Microsoft automatically, I guess.

When you create a passwordLifetime policy specifying 12 months of lifetime, it is automatically translated to 365 days in the policy. On the next screenshot you can see my previous PATCH payload for defaultAppManagementPolicy which was followed by a GET to countercheck the settings:

passwordLifetime set to P12M which is translated to P365D

Remark: 12 months is not necessarily 365 days (leap years!). This may cause issues in automations too, when attempting to create a password valid for 1 year/12 months, which is 366 days in this case.
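
The leap-year difference is easy to demonstrate with plain date arithmetic in PowerShell (the interval 2023-07-01 to 2024-07-01 contains the 2024 leap day):

$start = Get-Date '2023-07-01'
($start.AddMonths(12) - $start).Days   #366 - the '12 months' interpretation crosses Feb 29
$start.AddDays(365)                    #2024-06-30 - one day short of a calendar year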

The point is that even if you set this lifetime to P12M (12 months) or P365D (365 days), this will prevent Application Proxy from adding the password credential, because the expiration for this password is set to T+365 days+4 minutes:

PasswordCredential endDateTime and startDateTime for an app published with Application Proxy

To get over this issue, modify the defaultAppManagementPolicy to allow 366 days of lifetime for a password credential:

Modifying the maxLifetime to 366 days
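
For reference, a hedged sketch of the corresponding call (property names follow the defaultAppManagementPolicy resource schema; adjust to your own policy before applying):

METHOD: PATCH
ENDPOINT: beta
URI: https://graph.microsoft.com/beta/policies/defaultAppManagementPolicy
REQUEST BODY:
{
  "applicationRestrictions": {
    "passwordCredentials": [
      {
        "restrictionType": "passwordLifetime",
        "maxLifetime": "P366D"
      }
    ]
  }
}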

Now the application is successfully published:

*I used a trial licence back then to set up the policies… when the trial licence expires, the policies remain effective, but you will not be able to modify them – so you have to buy a licence to roll back these changes. Be cautious when playing with settings tied to a trial licence 🙃

Windows Web sign-in – my notes

I spent ‘some’ time exploring this web sign-in thing and thought to share the results of my research.

History

Web sign-in has been available since Windows 10 1809 (link) as a private preview feature, and it was restricted to TAP (Temporary Access Pass).

There was a time when Web sign-in was not limited to TAP, so you could log in using your username + password or passwordless in the popup window (reference). But since this was a preview feature, it was not recommended for production use.

Present

According to this documentation, Web sign-in is available since Windows 11 22H2 with KB5030310 (my experience is a bit different). According to the Authentication policy CSP documentation, Web sign-in is restricted to TAP only:

In the ‘October 2023 – What’s New in Microsoft Entra…’ video from The Microsoft 425Show (link), the option to use Web sign-in with a federated identity or passwordless sign-in is presented as a feature of Windows 11 23H2 (expected to be released on November 14, 2023).

While writing this article, I installed the KB5031455 update on a test machine which enabled Web sign-in that is not restricted to TAP-only.

Anyways, I guess at some point in time this will be available for everybody, so let’s see what an IT admin should know about this new way to sign in.

Basics

  • Web sign-in is only available on AzureAD (Entra) joined devices, Hybrid joined devices can’t benefit from it
  • Internet connectivity is required as the authentication is done over the Internet
  • Web sign-in is a credential provider (like the password, PIN or smartcard provider you see on the login screen), authentication provider is still the AAD Token Issuer
    • {C5D7540A-CD51-453B-B22B-05305BA03F07} – Cloud Experience Credential Provider
    • Credential Providers can be found in the registry here: Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\Credential Providers
  • When using passwordless option in Web sign-in, the Primary Refresh Token will get the MFA claim since it is a multi-factor authentication mechanism.*
    • Experience: Web sign-in also allows you to use username+password login – but in this case the MFA claim is not present (so when accessing a resource to which a Conditional Access policy requires MFA, then the users will be prompted). This is because “Windows 10 or newer maintain a partitioned list of PRTs for each credential. So, there’s a PRT for each of Windows Hello for Business, password, or smartcard. This partitioning ensures that MFA claims are isolated based on the credential used, and not mixed up during token requests.” (link). So this is not an option to bypass MFA 🙂
  • In my opinion, web sign-in is not intended to be the primary authentication method for a regular use device (one device-one user). This can be used as part of the passwordless journey (before Windows Hello for Business enrollment) and/or on shared devices (where WHfB is not an option but you want to provide a passwordless solution).
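
To see which credential providers are present on a machine (and to spot the Cloud Experience provider GUID mentioned above), the registry key can be enumerated – a quick sketch:

Get-ChildItem 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\Credential Providers' |
    ForEach-Object { '{0}  {1}' -f $_.PSChildName, (Get-ItemProperty -Path $_.PSPath).'(default)' }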

* I was wondering if ‘passwordless’ is truly an MFA mechanism, because ‘passwordless’ is not necessarily MFA by design. It is just one factor in the authentication process (‘something you own’ = the device you use to authenticate) – but in the case of ‘Microsoft Authenticator passwordless’ the second factor (biometric or PIN) is enforced by the application (link). If you remove the PIN code and/or the biometric data on a device which is already registered for passwordless, you break your passwordless registration:

Beyond the Basics

As per the documentation, you can turn it on using Intune or a provisioning package… what is not in the documentation (at least not explicitly) is that it can also be turned on using the MDM WMI Bridge provider (back in the day I used it a lot for Always On VPN deployment via SCCM).

Enable web sign-in via PowerShell

$config_ClassName = "MDM_Policy_Config01_Authentication02"
$namespaceName = "root\cimv2\mdm\dmmap"

#Modify Web sign-in
$newInstance = New-Object Microsoft.Management.Infrastructure.CimInstance $config_ClassName, $namespaceName
$property = [Microsoft.Management.Infrastructure.CimProperty]::Create("ParentID", "./Vendor/MSFT/Policy/Config", 'String', 'Key')
$newInstance.CimInstanceProperties.Add($property)
$property = [Microsoft.Management.Infrastructure.CimProperty]::Create("InstanceID", "Authentication", 'String', 'Key')
$newInstance.CimInstanceProperties.Add($property)
$property = [Microsoft.Management.Infrastructure.CimProperty]::Create("EnableWebSignIn", "1", 'SInt32', 'Property')  #set to 0 to turn it off
$newInstance.CimInstanceProperties.Add($property)
$session = New-CimSession

if (Get-CimInstance -ClassName $config_ClassName -Namespace $namespaceName -ErrorAction SilentlyContinue){
    $session.ModifyInstance($namespaceName, $newInstance)
}else{
    $session.CreateInstance($namespaceName, $newInstance)
}
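
To double-check that the policy landed, the matching Result class of the MDM Bridge can be queried – note that the class name below is my assumption based on the Config/Result class naming convention of the bridge:

Get-CimInstance -Namespace 'root\cimv2\mdm\dmmap' -ClassName 'MDM_Policy_Result01_Authentication02' |
    Select-Object EnableWebSignIn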

The process is run under a system-managed user account, ‘WsiAccount’, which can be seen in the Local Users and Groups console:

The command that is providing the sign-in experience:

“C:\Windows\SystemApps\MicrosoftWindows.Client.CBS_cw5n1h2txyewy\LogonWebHostProduct.exe” -ServerName:LogonWebHost.AppX7x0f9anf7mgkz8zh6haqj4eed5q0jcn1.mca

Entra ID sign-in logs:

When you are signing-in the traditional way you see interactive sign-in events for the user Application = Windows Sign In and Resource = Windows Azure Active Directory

When using Web sign-in, the interactive part is Application = Microsoft Authentication Broker and Resource = Device Registration Service

But what is interesting here is that straight after the interactive login, there are two additional non-interactive sign-ins; one uses the Windows-AzureAD-Authentication-Provider/1.0 UserAgent:

This timestamp is exactly the same as the AzureAdPrtUpdateTime from dsregcmd /status:

which is followed by a non-interactive Windows Sign In later:

Too much time spent for just a little valuable information… But “it’s not about the destination, it’s about the journey”.

Query Windows Hello for Business registrations and usage

So recently I was planning on requiring authentication strengths in a Conditional Access policy – more precisely, requiring Windows Hello for Business – when I realized that I’m not 100% sure that every user will meet this requirement. I wanted to make sure everybody has a WHfB enrollment and that it is actively in use – so let’s see the process.

Note: I will use ‘Hello’ for simplicity, but don’t confuse Windows Hello with Windows Hello for Business – two totally different things.

TL;DR

  • Having a Hello for Business enrollment does not necessarily mean that it is actively used or that it is even a “valid” enrollment
  • Entra portal – Protection – Authentication methods – User registration details can be used to filter for those who have Hello
  • For a particular user, the Authentication methods blade can give information about Hello device registrations
  • Filtering the Sign-in logs to the “Windows Sign In” application can give some overview about Hello usage
  • I wrote a script to have all this info in one ugly PowerShell object

First of all, I want to highlight this section from MS documentation:

Windows 10 or newer maintain a partitioned list of PRTs for each credential. So, there’s a PRT for each of Windows Hello for Business, password, or smartcard. This partitioning ensures that MFA claims are isolated based on the credential used, and not mixed up during token requests.

It means that when you log in to Windows using your password, the PRT used will not get the MFA claim even if the user has a Hello registration on the device. And it can happen that the user reverts to password usage (e.g. forgot the PIN code, the fingerprint reader didn’t recognize them, etc.) – and Windows tends to ask for the last credential used* – so bye-bye Hello and hello again Password (sorry for this terrible joke).

*Update: This behaviour is controlled by the NgcFirst registry key, in the following hive: HKLM\Software\Microsoft\Windows\Currentversion\Authentication\CredentialProviders\{D6886603-9D2F-4EB2-B667-1971041FA96B}\<usersid>\NgcFirst
There is a ConsecutiveSwitchCount counter, which increases by 1 when the user logs in using a password. Also here, you can find the MaxSwitchCount DWORD which is set to 3 by default. When the user uses password login 3 times in a row, then it is considered an Opt-out, which is visible in the OptOut entry (set to 1)
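
A quick sketch to dump these values for every user SID present under the key (run elevated; the path follows the hive above, and users without an NgcFirst subkey will simply show empty values):

$base = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\CredentialProviders\{D6886603-9D2F-4EB2-B667-1971041FA96B}'
Get-ChildItem $base | ForEach-Object {
    $ngc = Get-ItemProperty -Path (Join-Path $_.PSPath 'NgcFirst') -ErrorAction SilentlyContinue
    [pscustomobject]@{
        UserSid                = $_.PSChildName
        ConsecutiveSwitchCount = $ngc.ConsecutiveSwitchCount
        MaxSwitchCount         = $ngc.MaxSwitchCount
        OptOut                 = $ngc.OptOut
    }
}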

User opted out from Windows Hello for Business authentication

But let’s get back to square one: when you open the Authentication methods blade on Entra, you have the User registration details which can be used to list users with Hello:

User registration details filtered to Hello registrations

Let’s open one user to see the devices registered:

Authentication methods for one user

Yes, sometimes the Detail column does not show the computer name – however, if you click on the three dots menu and select View details, you can see the device Id and object Id – very user friendly, isn’t it?

Hello registration details

Note: when a device is deleted, the registration will remain but it will not be tied to any device

And the last piece is the sign-in log: if you filter the sign-ins to application “Windows Sign In” and open the entries, the Authentication Details will reveal the method used:

Windows Sign in event using Hello

My requirement was to have a table with each Hello registration for every user and a timestamp of the last Hello sign-in event. This is why I wrote the following script (it authenticates with the Microsoft Graph PowerShell app ID, so that application is assumed to be consented in your environment):

#MSAL.PS module required

$tenantID = '<tenantID>'
$graphPowerShellAppId = '14d82eec-204b-4c2f-b7e8-296a70dab67e'

$token = Get-MsalToken -TenantId $tenantID -Interactive -ClientId $graphPowerShellAppId -Scopes "AuditLog.Read.All","Directory.Read.All","UserAuthenticationMethod.Read.All"
$accessToken = $token.AccessToken

 $headers = @{ 
    'Content-Type' = 'application/json'
    'Accept' = 'application/json'
    'Authorization' = "Bearer $accessToken" 
    }

#WHfB enrolledusers
Write-Host -ForegroundColor Green "Fetching information from User registration details"
$url = 'https://graph.windows.net/myorganization/activities/authenticationMethodUserDetails?$filter=((methodsRegistered/any(t:%20t%20eq%20%27windowsHelloForBusiness%27)))&$orderby=userPrincipalName%20asc&api-version=beta'
$response = Invoke-WebRequest -Method Get -Uri $url -Headers $headers -ErrorAction Stop | ConvertFrom-Json | % {$_.value} 

Write-host -ForegroundColor Green "Querying authentication methods"
$whfb_authInfo = $response.userprincipalname | % {
    Write-Host -ForegroundColor Yellow "Querying $_"
    $url = "https://graph.microsoft.com/v1.0/users/$($_)/authentication/methods"
    [pscustomobject]@{
        UPN = $_
        WHfBInfo = Invoke-WebRequest -Method Get -Uri $url -Headers $headers -ErrorAction Stop | ConvertFrom-Json | % {$_.value} | ? {$_.'@odata.type' -eq '#microsoft.graph.windowsHelloForBusinessAuthenticationMethod'}
    }
}

function Expand-WHfBMethod ($UPN,$id){
Write-Host -ForegroundColor Yellow "Expanding WHfB authentication method for $UPN"
$url = "https://graph.microsoft.com/beta/users/$($upn)/authentication/windowsHelloForBusinessMethods/$($id)/?" + '$expand=device'
#Write-Host $url -ForegroundColor Red
Invoke-WebRequest -Method Get -Uri $url -Headers $headers -ErrorAction Stop | ConvertFrom-Json 
}

function Search-WHfBWindowsSignIn ($UPN,$deviceid){
Write-Host -ForegroundColor Yellow "Searching WHfB Windows Sign-in event for $UPN on device $deviceid"
$url = "https://graph.microsoft.com/beta/auditLogs/signIns?" + '$filter=(userprincipalname eq' + " '" + $UPN + "') and (appid eq '38aa3b87-a06d-4817-b275-7a316988d93b')" + " and (devicedetail/deviceid eq '" + $deviceid + "')" #appId for Windows Sign-in
$response = Invoke-WebRequest -Method Get -Uri $url -Headers $headers -ErrorAction Stop | ConvertFrom-Json
$response.value | ? {$_.authenticationdetails.authenticationmethod -eq "Windows Hello for Business"} | sort createdDatetime | select -Last 1 | % {$_.createdDatetime}
}

$report_WHfb = foreach ($item in $whfb_authInfo){
    $item.whfbinfo | % {
        $whfbmethod = Expand-WHfBMethod -UPN $item.UPN -id $_.id
        [pscustomobject]@{
            UPN = $item.UPN
            DeviceDisplayName = $whfbmethod.displayName
            DeviceID = $whfbmethod.device.deviceId
            HelloForBusinessMethodLastUsed = Search-WHfBWindowsSignIn -UPN $item.UPN -deviceid $whfbmethod.device.deviceId
            EnrollmentDate = $_.createdDateTime
            KeyStrength = $_.keyStrength
        }
    }
}

$report_WHfb | ft

Example output:

Note: to find the HelloForBusinessMethodLastUsed value, the script is querying the sign-in logs which will take some time in a larger environment.

Note2: if the DeviceID field equals 00000000-0000-0000-0000-000000000000, then it is a Hello registration that does not correspond to any Entra joined device – probably the device was deleted. You may want to review these entries and delete them.
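
Those orphaned registrations can be cleaned up via Graph as well – a hedged sketch reusing the variables and functions from the script above (the token would additionally need the UserAuthenticationMethod.ReadWrite.All scope, and the actual DELETE is commented out on purpose):

foreach ($item in $whfb_authInfo){
    $item.WHfBInfo | ForEach-Object {
        $method = Expand-WHfBMethod -UPN $item.UPN -id $_.id
        if ($method.device.deviceId -eq '00000000-0000-0000-0000-000000000000'){
            $url = "https://graph.microsoft.com/beta/users/$($item.UPN)/authentication/windowsHelloForBusinessMethods/$($_.id)"
            Write-Host "Orphaned Hello registration $($_.id) for $($item.UPN)"
            #Invoke-WebRequest -Method Delete -Uri $url -Headers $headers   #uncomment to actually delete
        }
    }
}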